Supplementary Material

In this supplementary material, we first provide an overview of our proof techniques in Appendix A. Our analysis of the generalization error is based on an extension of Gordon's Gaussian process inequality, the Convex Gaussian Min-max Theorem (CGMT). Here R is a continuous function that is convex in its first argument and concave in its second. The main result of the CGMT connects two random optimization problems: the primary optimization (PO) and an auxiliary optimization (AO). The CGMT framework has been used to infer statistical properties of estimators in certain high-dimensional asymptotic regimes. The analysis then proceeds in two steps: first, reduce the PO to the AO via the CGMT; second, derive the point-wise limit of the AO objective as a convex-concave optimization problem over only a few scalar variables.
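For context, the two random optimization problems that the CGMT connects are usually written as follows. This is the standard formulation from the CGMT literature, not a restatement of this paper's specific objectives; the constraint sets $S_w$, $S_u$ and the convex-concave coupling function $\psi$ are generic placeholders:

```latex
% Primary optimization (PO), with G an i.i.d. Gaussian matrix:
\Phi(G) \;=\; \min_{w \in S_w} \; \max_{u \in S_u} \; u^\top G w \;+\; \psi(w, u)

% Auxiliary optimization (AO), with independent Gaussian vectors g, h:
\phi(g, h) \;=\; \min_{w \in S_w} \; \max_{u \in S_u} \; \|w\|_2 \, g^\top u \;+\; \|u\|_2 \, h^\top w \;+\; \psi(w, u)
```

The CGMT asserts that, under convexity of $S_w$, $S_u$ and convex-concavity of $\psi$, high-probability statements about the AO transfer to the PO, which is what makes the two-step analysis above possible.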
ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods
Numerous methods have been implemented that pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, the greatest common denominator of these problem settings is small, which significantly complicates benchmarking. Hence, we introduce ABCFair, a benchmark approach that adapts to the desiderata of a real-world problem setting, enabling proper comparability between methods for any use case. We apply this benchmark to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual-label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off.
Supplementary material - ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods
We used the sex and the education of the student's parents as the sensitive attributes for this dataset. We removed all features that are other expressions of the labels. Note that this is the only folktables dataset on which we report results in the main paper. Sex, age, and race are used as sensitive features for this dataset. We deem these features not relevant for this use case.
Fair Regression via Plug-In Estimator and Recalibration
We study the problem of learning an optimal regression function subject to a fairness constraint: conditionally on the sensitive feature, the distribution of the function's output must remain the same. This constraint naturally extends the notion of demographic parity, often used in classification, to the regression setting. We tackle this problem by leveraging a proxy-discretized version, for which we derive an explicit expression of the optimal fair predictor. This result naturally suggests a two-stage approach, in which we first estimate the (unconstrained) regression function from a set of labeled data and then recalibrate it with another set of unlabeled data.
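The recalibration step can be illustrated with a quantile-matching sketch in the spirit of Wasserstein-barycenter repair for demographic parity. This is a common construction for this problem, not the paper's estimator; the helper name `recalibrate_dp` and its details are our assumptions:

```python
import numpy as np

def recalibrate_dp(scores, groups):
    """Map each group's predictions onto a shared barycenter-like
    distribution, so the output distribution no longer depends on the
    sensitive feature (illustrative sketch, not the paper's method)."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    out = np.empty_like(scores)
    uniq, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()
    for g in uniq:
        mask = groups == g
        # Rank within the group -> empirical quantile level in (0, 1).
        ranks = scores[mask].argsort().argsort()
        q = (ranks + 0.5) / mask.sum()
        # Barycenter quantile = weighted average of every group's
        # empirical quantile function evaluated at level q.
        out[mask] = sum(
            w * np.quantile(scores[groups == g2], q)
            for g2, w in zip(uniq, weights)
        )
    return out
```

In the two-stage scheme of the abstract, `scores` would be the unconstrained predictor's outputs on the unlabeled recalibration set; after the mapping, every group shares the same output distribution, which is the demographic-parity constraint in the regression setting.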
Beyond Verification: Abductive Explanations for Post-AI Assessment of Privacy Leakage
Sonna, Belona, Grastien, Alban, Benn, Claire
Privacy leakage in AI-based decision processes poses significant risks, particularly when sensitive information can be inferred. We propose a formal framework to audit privacy leakage using abductive explanations, which identify the minimal sufficient evidence justifying model decisions and determine whether sensitive information is disclosed. Our framework formalizes both individual- and system-level leakage, introducing the notion of Potentially Applicable Explanations (PAE) to identify individuals whose outcomes can shield those with sensitive features. This approach provides rigorous privacy guarantees while producing human-understandable explanations, a key requirement for auditing tools. An experimental evaluation on the German Credit Dataset illustrates how the importance of a sensitive literal in the model's decision process affects privacy leakage. Despite computational challenges and simplifying assumptions, our results demonstrate that abductive reasoning enables interpretable privacy auditing, offering a practical pathway to reconcile transparency, model interpretability, and privacy preservation in AI decision-making.
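The core object here, a minimal sufficient reason for a decision, can be sketched by exhaustive search over feature subsets: a subset of the instance's assignments is an abductive explanation if every completion of the remaining features yields the same decision. The function name, the brute-force strategy (exponential in the number of features), and the toy model are illustrative assumptions, not the authors' algorithm:

```python
from itertools import combinations, product

def abductive_explanation(model, instance, domains):
    """Return the smallest set of feature assignments that by itself
    entails the model's decision on `instance`.  Brute force; only
    feasible for tiny discrete feature spaces."""
    target = model(instance)
    n = len(instance)
    for k in range(n + 1):                       # smallest subsets first
        for keep in combinations(range(n), k):
            free = [i for i in range(n) if i not in keep]
            # Sufficient iff every completion of the free features
            # still produces the same decision.
            if all(
                model([vals[free.index(i)] if i in free else instance[i]
                       for i in range(n)]) == target
                for vals in product(*(domains[i] for i in free))
            ):
                return {i: instance[i] for i in keep}

# Toy loan model: approve iff income is high and there is no past default.
# Feature order (hypothetical): [income_high, defaulted, sensitive_attr].
loan_model = lambda x: x[0] and not x[1]
explanation = abductive_explanation(
    loan_model, [True, False, True], [[False, True]] * 3
)
```

Here the returned explanation fixes only the first two features and omits the third (sensitive) one, which is the kind of evidence the paper's audit looks for: when minimal explanations never need a sensitive literal, the decision cannot be used to infer it.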